fix: VLLM Supplier Dialogue Verification#2412

Merged
shaohuzhang1 merged 1 commit into main from pr@main@fix_vllm_chat
Feb 26, 2025

Conversation

@shaohuzhang1
Contributor

fix: VLLM Supplier Dialogue Verification

error=str(e)))
return True

def encryption_dict(self, model_info: Dict[str, object]):
Contributor Author


The code looks mostly well-written, but there is one issue worth improving: the exception handling does not cover the case where the call to get_model itself fails to return a usable model.

Improvement Suggestions:

  1. Ensure Proper Error Handling: After calling get_model(), check that the returned model is not None before attempting to invoke it. This prevents an unhandled AttributeError when model retrieval fails.
    def is_valid(self, model_type: str, model_name, model_credential: Dict[str, object], model_params):
        # Ensure the model was actually retrieved before using it
        model = provider.get_model(model_type, model_name, model_credential, **model_params)
        if model is None:  # Handle the case where model retrieval fails
            raise AppApiException(ValidCode.model_not_found.value,
                                  gettext('Model not found'))

        try:
            # A successful invocation is sufficient for validation; res is kept for debugging
            res = model.invoke([HumanMessage(content=gettext('Hello'))])
        except Exception as e:
            raise AppApiException(ValidCode.valid_error.value,
                                  gettext('Verification failed, please check whether the parameters are correct: {error}')
                                  .format(error=str(e)))

        return True

These modifications improve robustness: verifying that the model was actually retrieved before invoking it prevents runtime exceptions caused by missing models. Note also that an explicit `is None` comparison is more precise than relying on the truthiness of the value returned by provider.get_model().
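As a sanity check of the suggestion above, the sketch below exercises both failure paths of the proposed `is_valid` logic. Note that `AppApiException`, `ValidCode`, `gettext`, `HumanMessage`, and the `provider` object are simplified stand-ins for the project's real classes (and the error codes 404/500 are arbitrary placeholders), shown only to illustrate the control flow, not the actual MaxKB implementation.

```python
from enum import Enum
from typing import Dict


def gettext(s: str) -> str:
    # Stand-in for the project's i18n helper; real code translates the string.
    return s


class ValidCode(Enum):
    # Placeholder error codes; the real project defines its own values.
    model_not_found = 404
    valid_error = 500


class AppApiException(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code
        self.message = message


class HumanMessage:
    def __init__(self, content: str):
        self.content = content


class BrokenModel:
    """Simulates a model whose endpoint is unreachable."""
    def invoke(self, messages):
        raise RuntimeError("connection refused")


class StubProvider:
    """Returns None for an unknown model name, a broken model otherwise."""
    def get_model(self, model_type, model_name, model_credential, **model_params):
        if model_name == "missing":
            return None
        return BrokenModel()


provider = StubProvider()


def is_valid(model_type, model_name, model_credential: Dict[str, object], model_params):
    model = provider.get_model(model_type, model_name, model_credential, **model_params)
    if model is None:  # model retrieval failed
        raise AppApiException(ValidCode.model_not_found.value,
                              gettext('Model not found'))
    try:
        model.invoke([HumanMessage(content=gettext('Hello'))])
    except Exception as e:
        raise AppApiException(ValidCode.valid_error.value,
                              gettext('Verification failed, please check whether the parameters are correct: {error}')
                              .format(error=str(e)))
    return True


# Exercise both failure paths.
for name in ("missing", "unreachable"):
    try:
        is_valid("LLM", name, {}, {})
    except AppApiException as e:
        print(name, "->", e.code)  # prints: missing -> 404, then unreachable -> 500
```

The key point the example demonstrates is that the two failure modes surface as distinct error codes: a None return from get_model maps to model_not_found, while an exception during invoke maps to valid_error with the underlying message preserved.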

@shaohuzhang1 shaohuzhang1 merged commit eb6f4e8 into main Feb 26, 2025
4 checks passed
@shaohuzhang1 shaohuzhang1 deleted the pr@main@fix_vllm_chat branch February 26, 2025 04:04
